DonorsChoose.org receives hundreds of thousands of project proposals each year for classroom projects in need of funding. Right now, a large number of volunteers are needed to manually screen each submission before it is approved to be posted on the DonorsChoose.org website.
Next year, DonorsChoose.org expects to receive close to 500,000 project proposals. As a result, there are three main problems they need to solve.
The goal of the competition is to predict whether or not a DonorsChoose.org project proposal submitted by a teacher will be approved, using the text of project descriptions as well as additional metadata about the project, teacher, and school. DonorsChoose.org can then use this information to identify projects most likely to need further review before approval.
The train.csv data set provided by DonorsChoose contains the following features:
| Feature | Description |
|---|---|
| project_id | A unique identifier for the proposed project. Example: p036502 |
| project_title | Title of the project. |
| project_grade_category | Grade level of students for which the project is targeted, one of an enumerated set of values. |
| project_subject_categories | One or more (comma-separated) subject categories for the project, from an enumerated list of values. |
| school_state | State where school is located (two-letter U.S. postal code). Example: WY |
| project_subject_subcategories | One or more (comma-separated) subject subcategories for the project. |
| project_resource_summary | An explanation of the resources needed for the project. |
| project_essay_1 | First application essay* |
| project_essay_2 | Second application essay* |
| project_essay_3 | Third application essay* |
| project_essay_4 | Fourth application essay* |
| project_submitted_datetime | Datetime when project application was submitted. Example: 2016-04-28 12:43:56.245 |
| teacher_id | A unique identifier for the teacher of the proposed project. Example: bdf8baa8fedef6bfeec7ae4ff1c15c56 |
| teacher_prefix | Teacher's title, one of an enumerated set of values. |
| teacher_number_of_previously_posted_projects | Number of project applications previously submitted by the same teacher. Example: 2 |
* See the section Notes on the Essay Data for more details about these features.
Additionally, the resources.csv data set provides more data about the resources required for each project. Each line in this file represents a resource required by a project:
| Feature | Description |
|---|---|
| id | A project_id value from the train.csv file. Example: p036502 |
| description | Description of the resource. Example: Tenor Saxophone Reeds, Box of 25 |
| quantity | Quantity of the resource required. Example: 3 |
| price | Price of the resource required. Example: 9.95 |
Note: Many projects require multiple resources. The id value corresponds to a project_id in train.csv, so you can use it as a key to retrieve all resources needed for a project. For example:
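A minimal lookup sketch (assuming resources.csv is in the working directory; p036502 is the example id from the table above):
import pandas as pd
resources = pd.read_csv('resources.csv')
# every resource row requested by this one project
print(resources[resources['id'] == 'p036502'][['description', 'quantity', 'price']])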
The data set contains the following label (the value you will attempt to predict):
| Label | Description |
|---|---|
| project_is_approved | A binary flag indicating whether DonorsChoose approved the project. A value of 0 indicates the project was not approved, and a value of 1 indicates the project was approved. |
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")

import os
import sys
import re  # tutorial about Python regular expressions: https://pymotw.com/2/re/
import string
import pickle
import sqlite3
from collections import Counter

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm

import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer

from sklearn import metrics, tree
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.metrics import confusion_matrix, roc_curve, auc

from gensim.models import Word2Vec, KeyedVectors

import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
project_data = pd.read_csv('train_data.csv')
resource_data = pd.read_csv('resources.csv')
print("Number of data points in train data", project_data.shape)
print('-'*50)
print("The attributes of data :", project_data.columns.values)
print("Number of data points in train data", resource_data.shape)
print(resource_data.columns.values)
resource_data.head(2)
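Before featurizing, it is worth checking how balanced the project_is_approved label defined above is (a quick sketch, now that train_data.csv is loaded):
print(project_data['project_is_approved'].value_counts(normalize=True))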
price_data = resource_data.groupby('id').agg({'price':'sum', 'quantity':'sum'}).reset_index()
# join two dataframes in python:
project_data = pd.merge(project_data, price_data, on='id', how='left')
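Since the merge uses how='left', a project with no matching resource rows would get NaN price and quantity; a quick sanity check:
print(project_data['price'].isnull().sum(), "projects have no matching resource rows")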
### project_subject_categories
categories = list(project_data['project_subject_categories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
cat_list = []
for i in categories:
    temp = ""
    # consider text like "Math & Science, Warmth, Care & Hunger"
    for j in i.split(','):  # split it into parts: ["Math & Science", " Warmth", " Care & Hunger"]
        if 'The' in j.split():  # j.split() splits a category on spaces: "Math & Science" => "Math", "&", "Science"
            j = j.replace('The', '')  # remove the word 'The'
        j = j.replace(' ', '')  # remove all spaces, e.g. "Math & Science" => "Math&Science"
        temp += j.strip() + " "  # " abc ".strip() returns "abc", removing leading/trailing spaces
    temp = temp.replace('&', '_')  # replace '&' with '_', e.g. "Math&Science" => "Math_Science"
    cat_list.append(temp.strip())
project_data['clean_categories'] = cat_list
project_data.drop(['project_subject_categories'], axis=1, inplace=True)
my_counter = Counter()
for word in project_data['clean_categories'].values:
    my_counter.update(word.split())
cat_dict = dict(my_counter)
sorted_cat_dict = dict(sorted(cat_dict.items(), key=lambda kv: kv[1]))
### project_subject_subcategories
sub_categories = list(project_data['project_subject_subcategories'].values)
# same cleaning as for the categories above
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
sub_cat_list = []
for i in sub_categories:
    temp = ""
    for j in i.split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    sub_cat_list.append(temp.strip())
project_data['clean_subcategories'] = sub_cat_list
project_data.drop(['project_subject_subcategories'], axis=1, inplace=True)
# count of all the words in corpus python: https://stackoverflow.com/a/22898595/4084039
my_counter = Counter()
for word in project_data['clean_subcategories'].values:
    my_counter.update(word.split())
sub_cat_dict = dict(my_counter)
sorted_sub_cat_dict = dict(sorted(sub_cat_dict.items(), key=lambda kv: kv[1]))
# merge the four essay columns into one text column
# note: map(str) turns missing essays (NaN) into the literal string 'nan'
project_data["essay"] = project_data["project_essay_1"].map(str) +\
        project_data["project_essay_2"].map(str) + \
        project_data["project_essay_3"].map(str) + \
        project_data["project_essay_4"].map(str)
project_data.head(2)
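Because map(str) turns missing essays into the literal token 'nan' (visible as 'nannan' in one of the sample essays below), a minimal alternative sketch that avoids those tokens:
project_data["essay"] = (project_data["project_essay_1"].fillna('') +
                         project_data["project_essay_2"].fillna('') +
                         project_data["project_essay_3"].fillna('') +
                         project_data["project_essay_4"].fillna(''))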
# printing some random essays
print(project_data['essay'].values[0])
print("="*50)
print(project_data['essay'].values[150])
print("="*50)
print(project_data['essay'].values[1000])
print("="*50)
print(project_data['essay'].values[20000])
print("="*50)
print(project_data['essay'].values[99999])
print("="*50)
# https://stackoverflow.com/a/47091490/4084039
import re
def decontracted(phrase):
    # specific
    phrase = re.sub(r"won't", "will not", phrase)
    phrase = re.sub(r"can\'t", "can not", phrase)
    # general
    phrase = re.sub(r"n\'t", " not", phrase)
    phrase = re.sub(r"\'re", " are", phrase)
    phrase = re.sub(r"\'s", " is", phrase)
    phrase = re.sub(r"\'d", " would", phrase)
    phrase = re.sub(r"\'ll", " will", phrase)
    phrase = re.sub(r"\'t", " not", phrase)
    phrase = re.sub(r"\'ve", " have", phrase)
    phrase = re.sub(r"\'m", " am", phrase)
    return phrase
sent = decontracted(project_data['essay'].values[20000])
print(sent)
print("="*50)
# \r \n \t remove from string python: http://texthandler.com/info/remove-line-breaks-python/
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
print(sent)
# remove special characters: https://stackoverflow.com/a/5843547/4084039
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
print(sent)
# https://gist.github.com/sebleier/554280
# we are removing the words from the stop words list: 'no', 'nor', 'not'
stopwords= ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\
"you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \
'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\
'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \
'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \
'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \
'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\
'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\
'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\
'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \
's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \
've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\
"hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\
"mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \
'won', "won't", 'wouldn', "wouldn't"]
# Combining all the above steps
from tqdm import tqdm
preprocessed_essays = []
# tqdm is for printing the status bar
for sentence in tqdm(project_data['essay'].values):
    sent = sentence.lower()
    sent = decontracted(sent)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e not in stopwords)
    preprocessed_essays.append(sent.strip())
# after preprocessing
preprocessed_essays[20000]
# Updating dataframe with the clean essay and removing the raw essay column
project_data['clean_essay'] = preprocessed_essays
project_data.drop(['essay'], axis=1, inplace=True)
project_data.head(2)
# similarly you can preprocess the titles also
# Combining all the above steps
from tqdm import tqdm
preprocessed_title = []
# tqdm is for printing the status bar
for sentence in tqdm(project_data['project_title'].values):
    sent = sentence.lower()
    sent = decontracted(sent)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e not in stopwords)
    preprocessed_title.append(sent.strip())
# after preprocessing
preprocessed_title[20000]
# Updating dataframe with the clean project title and removing the old project title
project_data['clean_project_title'] = preprocessed_title
project_data.drop(['project_title'], axis=1, inplace=True)
project_data.head(2)
# similarly you can preprocess the project_grade also
# Combining all the above steps
from tqdm import tqdm
preprocessed_grade = []
# tqdm is for printing the status bar
for sentence in tqdm(project_data['project_grade_category'].values):
    sent = sentence.lower()
    sent = decontracted(sent)
    sent = sent.replace(' ', '_')
    sent = sent.replace('-', '_')
    # https://gist.github.com/sebleier/554280
    # sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords)
    preprocessed_grade.append(sent.strip())
preprocessed_grade[:10]
# Updating dataframe with the clean project grade and removing the old column
project_data.drop(['project_grade_category'], axis=1, inplace=True)
project_data['project_grade_category'] = preprocessed_grade
project_data.head(2)
# remove unnecessary column: https://cmdlinetips.com/2018/04/how-to-drop-one-or-more-columns-in-pandas-dataframe/
project_data = project_data.drop(['Unnamed: 0','id','teacher_id','project_submitted_datetime', \
'project_essay_1','project_essay_2','project_essay_3','project_essay_4', \
'project_resource_summary'], axis=1)
project_data.head()
project_data['teacher_prefix'].isnull().values.any()
project_data['school_state'].isnull().values.any()
project_data['teacher_number_of_previously_posted_projects'].isnull().values.any()
project_data['project_is_approved'].isnull().values.any()
project_data['price'].isnull().values.any()
project_data['quantity'].isnull().values.any()
project_data['clean_categories'].isnull().values.any()
project_data['clean_subcategories'].isnull().values.any()
project_data['clean_essay'].isnull().values.any()
project_data['clean_project_title'].isnull().values.any()
project_data['project_grade_category'].isnull().values.any()
project_data['teacher_prefix'].isnull().sum().sum()
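The same column-wise null checks can be done in one shot:
# null count for every column at once
project_data.isnull().sum()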
project_data.columns
We are going to consider:
- school_state : categorical data
- clean_categories : categorical data
- clean_subcategories : categorical data
- project_grade_category : categorical data
- teacher_prefix : categorical data
- project_title : text data
- essay : text data
- project_resource_summary : text data (optional)
- quantity : numerical (optional)
- teacher_number_of_previously_posted_projects : numerical
- price : numerical
# we use count vectorizer to convert the values into one-hot encoded features
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(vocabulary=list(sorted_cat_dict.keys()), lowercase=False, binary=True)
categories_one_hot = vectorizer.fit_transform(project_data['clean_categories'].values)
print(vectorizer.get_feature_names())
print("Shape of matrix after one hot encodig ",categories_one_hot.shape)
# we use count vectorizer to convert the values into one
vectorizer = CountVectorizer(vocabulary=list(sorted_sub_cat_dict.keys()), lowercase=False, binary=True)
sub_categories_one_hot = vectorizer.fit_transform(project_data['clean_subcategories'].values)
print(vectorizer.get_feature_names())
print("Shape of matrix after one hot encodig ",sub_categories_one_hot.shape)
# you can do the similar thing with state, teacher_prefix and project_grade_category also
# We are considering only the words which appeared in at least 10 documents(rows or projects).
vectorizer = CountVectorizer(min_df=10)
text_bow = vectorizer.fit_transform(preprocessed_essays)
print("Shape of matrix after one hot encodig ",text_bow.shape)
# you can vectorize the title also
# before you vectorize the title make sure you preprocess it
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(min_df=10)
text_tfidf = vectorizer.fit_transform(preprocessed_essays)
print("Shape of matrix after one hot encodig ",text_tfidf.shape)
'''
# Reading glove vectors in python: https://stackoverflow.com/a/38230349/4084039
def loadGloveModel(gloveFile):
    print("Loading Glove Model")
    f = open(gloveFile, 'r', encoding="utf8")
    model = {}
    for line in tqdm(f):
        splitLine = line.split()
        word = splitLine[0]
        embedding = np.array([float(val) for val in splitLine[1:]])
        model[word] = embedding
    print("Done.", len(model), " words loaded!")
    return model
model = loadGloveModel('glove.42B.300d.txt')
# ============================
Output:
Loading Glove Model
1917495it [06:32, 4879.69it/s]
Done. 1917495 words loaded!
# ============================
words = []
for i in preprocessed_essays:
    words.extend(i.split(' '))
for i in preprocessed_title:
    words.extend(i.split(' '))
print("all the words in the corpus", len(words))
words = set(words)
print("the unique words in the corpus", len(words))
inter_words = set(model.keys()).intersection(words)
print("The number of words that are present in both glove vectors and our corpus", \
      len(inter_words), "(", np.round(len(inter_words)/len(words)*100, 3), "%)")
words_corpus = {}
words_glove = set(model.keys())
for i in words:
    if i in words_glove:
        words_corpus[i] = model[i]
print("word 2 vec length", len(words_corpus))
# storing variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
import pickle
with open('glove_vectors', 'wb') as f:
    pickle.dump(words_corpus, f)
'''
# storing variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
# make sure you have the glove_vectors file
with open('glove_vectors', 'rb') as f:
    model = pickle.load(f)
    glove_words = set(model.keys())
# average Word2Vec
# compute average word2vec for each essay
avg_w2v_vectors = []  # the avg-w2v for each essay is stored in this list
for sentence in tqdm(preprocessed_essays):  # for each essay
    vector = np.zeros(300)  # glove vectors are of length 300
    cnt_words = 0  # number of words with a valid vector in the essay
    for word in sentence.split():  # for each word in the essay
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors.append(vector)
print(len(avg_w2v_vectors))
print(len(avg_w2v_vectors[0]))
tfidf_model = TfidfVectorizer()
tfidf_model.fit(preprocessed_essays)
# we are building a dictionary with word as the key and the idf as the value
dictionary = dict(zip(tfidf_model.get_feature_names(), list(tfidf_model.idf_)))
tfidf_words = set(tfidf_model.get_feature_names())
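For reference, the loop below computes, for each essay $s$ with GloVe vector $\vec{v}_w$ for word $w$:

$$\text{tfidf\_w2v}(s) = \frac{\sum_{w \in s} \text{tfidf}(w, s)\, \vec{v}_w}{\sum_{w \in s} \text{tfidf}(w, s)}, \qquad \text{tfidf}(w, s) = \text{idf}(w) \cdot \text{tf}(w, s)$$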
# TFIDF weighted Word2Vec
# compute tfidf-weighted word2vec for each essay
tfidf_w2v_vectors = []  # the tfidf-w2v for each essay is stored in this list
for sentence in tqdm(preprocessed_essays):  # for each essay
    vector = np.zeros(300)  # glove vectors are of length 300
    tf_idf_weight = 0  # sum of tf-idf weights of words with a valid vector
    for word in sentence.split():  # for each word in the essay
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]  # getting the vector for each word
            # idf value (dictionary[word]) times the tf value;
            # note: str.count matches substrings, so tf is only approximate
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split()))
            vector += (vec * tf_idf)  # accumulating the tfidf-weighted w2v
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors.append(vector)
print(len(tfidf_w2v_vectors))
print(len(tfidf_w2v_vectors[0]))
# Similarly you can vectorize for title also
# check this one: https://www.youtube.com/watch?v=0HOqOcln3Z4&t=530s
# standardization sklearn: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
from sklearn.preprocessing import StandardScaler
# price_standardized = standardScalar.fit(project_data['price'].values)
# this will raise the error:
# ValueError: Expected 2D array, got 1D array instead: array=[725.05 213.03 329. ... 399. 287.73 5.5 ].
# Reshape your data either using array.reshape(-1, 1)
price_scalar = StandardScaler()
price_scalar.fit(project_data['price'].values.reshape(-1,1)) # finding the mean and standard deviation of this data
print(f"Mean : {price_scalar.mean_[0]}, Standard deviation : {np.sqrt(price_scalar.var_[0])}")
# Now standardize the data with the above mean and standard deviation.
price_standardized = price_scalar.transform(project_data['price'].values.reshape(-1, 1))
price_standardized
print(categories_one_hot.shape)
print(sub_categories_one_hot.shape)
print(text_bow.shape)
print(price_standardized.shape)
# merge two sparse matrices: https://stackoverflow.com/a/19710648/4084039
from scipy.sparse import hstack
# with the same hstack function we are concatenating a sparse matrix and a dense matrix :)
X = hstack((categories_one_hot, sub_categories_one_hot, text_bow, price_standardized))
X.shape
# please write all the code with proper documentation, and proper titles for each subsection
# when you plot any graph make sure you use
# a. Title, that describes your plot, this will be very helpful to the reader
# b. Legends if needed
# c. X-axis label
# d. Y-axis label
Computing Sentiment Scores
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# import nltk
# nltk.download('vader_lexicon')
sid = SentimentIntensityAnalyzer()
for_sentiment = 'a person is a person no matter how small dr seuss i teach the smallest students with the biggest enthusiasm \
for learning my students learn in many different ways using all of our senses and multiple intelligences i use a wide range\
of techniques to help all my students succeed students in my class come from a variety of different backgrounds which makes\
for wonderful sharing of experiences and cultures including native americans our school is a caring community of successful \
learners which can be seen through collaborative student project based learning in and out of the classroom kindergarteners \
in my class love to work with hands on materials and have many different opportunities to practice a skill before it is\
mastered having the social skills to work cooperatively with friends is a crucial aspect of the kindergarten curriculum\
montana is the perfect place to learn about agriculture and nutrition my students love to role play in our pretend kitchen\
in the early childhood classroom i have had several kids ask me can we try cooking with real food i will take their idea \
and create common core cooking lessons where we learn important math and writing concepts while cooking delicious healthy \
food for snack time my students will have a grounded appreciation for the work that went into making the food and knowledge \
of where the ingredients came from as well as how it is healthy for their bodies this project would expand our learning of \
nutrition and agricultural cooking recipes by having us peel our own apples to make homemade applesauce make our own bread \
and mix up healthy plants from our classroom garden in the spring we will also create our own cookbooks to be printed and \
shared with families students will gain math and literature skills as well as a life long enjoyment for healthy cooking \
nannan'
ss = sid.polarity_scores(for_sentiment)
for k in ss:
    print('{0}: {1}, '.format(k, ss[k]), end='')
# we can use these 4 things as features/attributes (neg, neu, pos, compound)
# neg: 0.0, neu: 0.753, pos: 0.247, compound: 0.93
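As a minimal sketch of turning these into features (assuming the cleaned essays are in project_data['clean_essay']):
# compute the four VADER scores for every essay and store them as numeric columns
sentiment_scores = [sid.polarity_scores(essay) for essay in tqdm(project_data['clean_essay'].values)]
project_data['sentiment_neg'] = [s['neg'] for s in sentiment_scores]
project_data['sentiment_neu'] = [s['neu'] for s in sentiment_scores]
project_data['sentiment_pos'] = [s['pos'] for s in sentiment_scores]
project_data['sentiment_compound'] = [s['compound'] for s in sentiment_scores]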

# please write all the code with proper documentation, and proper titles for each subsection
# go through documentations and blogs before you start coding
# first figure out what to do, and then think about how to do it.
# reading and understanding error messages will be very helpful in debugging your code
# when you plot any graph make sure you use
# a. Title, that describes your plot, this will be very helpful to the reader
# b. Legends if needed
# c. X-axis label
# d. Y-axis label
# Combine the train.csv and resources.csv
# https://stackoverflow.com/questions/22407798/how-to-reset-a-dataframes-indexes-for-all-groups-in-one-step
from sklearn.model_selection import train_test_split
# https://www.geeksforgeeks.org/python-pandas-dataframe-sample/
# Take a 50k sample of the dataset
project_data = project_data.sample(n=50000)
# Remove rows that contain NaN; we observed that only 3 rows (in teacher_prefix) contain NaN
project_data = project_data[pd.notnull(project_data['teacher_prefix'])]
project_data.shape
project_data.head(2)
# Split train and test
tr_X, ts_X, tr_y, ts_y = train_test_split(project_data, project_data['project_is_approved'].values, test_size=0.33, random_state=1, stratify=project_data['project_is_approved'].values)
# Reset the dataframe indexes; K-fold cross-validation is performed later, at model-training time
tr_X = tr_X.reset_index(drop=True)
ts_X = ts_X.reset_index(drop=True)
tr_X.drop(['project_is_approved'], axis=1, inplace=True)
ts_X.drop(['project_is_approved'], axis=1, inplace=True)
print('Shape of Train Data', [tr_X.shape, tr_y.shape])
print('Shape of Test Data', [ts_X.shape, ts_y.shape])
# please write all the code with proper documentation, and proper titles for each subsection
# go through documentations and blogs before you start coding
# first figure out what to do, and then think about how to do it.
# reading and understanding error messages will be very helpful in debugging your code
# make sure you featurize train and test data separately
# when you plot any graph make sure you use
# a. Title, that describes your plot, this will be very helpful to the reader
# b. Legends if needed
# c. X-axis label
# d. Y-axis label
# For numerical features on train data
### 1) quantity
from sklearn.preprocessing import Normalizer
# normalization sklearn: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Normalizer.html
# note: Normalizer is stateless; with reshape(1,-1) the whole column is treated as a single sample,
# so every value is divided by the L2 norm of the column (non-negative data ends up in the range 0-1)
quantity_scalar = Normalizer()
quantity_scalar.fit(tr_X['quantity'].values.reshape(1,-1))
quantity_normalized = quantity_scalar.transform(tr_X['quantity'].values.reshape(1, -1))
### 2) price
# the price feature is already numerical; we scale it the same way
price_scalar = Normalizer()
price_scalar.fit(tr_X['price'].values.reshape(1,-1))
price_normalized = price_scalar.transform(tr_X['price'].values.reshape(1, -1))
### 3) teacher_number_of_previously_posted_projects
teacher_number_of_previously_posted_projects_scalar = Normalizer()
teacher_number_of_previously_posted_projects_scalar.fit(tr_X['teacher_number_of_previously_posted_projects'].values.reshape(1,-1))
teacher_number_of_previously_posted_projects_normalized = teacher_number_of_previously_posted_projects_scalar.transform(tr_X['teacher_number_of_previously_posted_projects'].values.reshape(1,-1))
print('Shape of quantity:', quantity_normalized.T.shape)
print('Shape of price:', price_normalized.T.shape)
print('Shape of teacher_number_of_previously_posted_projects:', teacher_number_of_previously_posted_projects_normalized.T.shape)
# Transform numerical attributes for test data
ts_price = price_scalar.transform(ts_X['price'].values.reshape(1,-1))
ts_quantity = quantity_scalar.transform(ts_X['quantity'].values.reshape(1,-1))
ts_teacher_number_of_previously_posted_projects = \
teacher_number_of_previously_posted_projects_scalar.transform(ts_X['teacher_number_of_previously_posted_projects'].\
values.reshape(1,-1))
print('--------------Test data--------------')
print('Shape of quantity:', ts_quantity.T.shape)
print('Shape of price:', ts_price.T.shape)
print('Shape of teacher_number_of_previously_posted_projects:', ts_teacher_number_of_previously_posted_projects.T.shape)
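A quick illustration of what Normalizer does here (the 4-element array is made up): with reshape(1,-1) the entire column is one sample, so each value is divided by the column's L2 norm.
import numpy as np
from sklearn.preprocessing import Normalizer
toy = np.array([[3.0, 4.0, 0.0, 5.0]])  # hypothetical price column, treated as one sample
print(Normalizer().fit_transform(toy))  # [[0.424 0.566 0. 0.707]] = each value / sqrt(3**2 + 4**2 + 5**2)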
# For categorical features on train data
# The same feature encoding is applied to school_state, teacher_prefix and project_grade_category
# One hot encoding for school state
### 1) school_state
print('==================================================================\n')
# CountVectorizer whose vocabulary is the set of unique school-state codes; binary=True gives a boolean BoW
vectorizer_school_state = CountVectorizer(vocabulary=tr_X['school_state'].unique(), lowercase=False, binary=True)
vectorizer_school_state.fit(tr_X['school_state'].values)
print('List of features in school_state', vectorizer_school_state.get_feature_names())
# Transform train data
school_state_one_hot = vectorizer_school_state.transform(tr_X['school_state'].values)
print("\nShape of school_state matrix after one hot encoding ", school_state_one_hot.shape)
### 2) project_subject_categories
print('==================================================================\n')
vectorizer_categories = CountVectorizer(vocabulary=list(sorted_cat_dict.keys()), lowercase=False, binary=True)
vectorizer_categories.fit(tr_X['clean_categories'].values)
print('List of features in project_subject_categories',vectorizer_categories.get_feature_names())
# Transform train data
categories_one_hot = vectorizer_categories.transform(tr_X['clean_categories'].values)
print("\nShape of project_subject_categories matrix after one hot encodig ",categories_one_hot.shape)
### 3) project_subject_subcategories
print('==================================================================\n')
vectorizer_subcategories = CountVectorizer(vocabulary=list(sorted_sub_cat_dict.keys()), lowercase=False, binary=True)
vectorizer_subcategories.fit(tr_X['clean_subcategories'].values)
print('List of features in project_subject_subcategories', vectorizer_subcategories.get_feature_names())
# Transform train data
subcategories_one_hot = vectorizer_subcategories.transform(tr_X['clean_subcategories'].values)
print("\nShape of project_subject_subcategories matrix after one hot encoding ", subcategories_one_hot.shape)
### 4) project_grade_category
print('==================================================================\n')
# One hot encoding for project_grade_category
# CountVectorizer whose vocabulary is the set of unique project_grade_category values; binary=True gives a boolean BoW
vectorizer_grade_category = CountVectorizer(vocabulary=tr_X['project_grade_category'].unique(), lowercase=False, binary=True)
vectorizer_grade_category.fit(tr_X['project_grade_category'].values)
print('List of features in project_grade_category', vectorizer_grade_category.get_feature_names())
# Transform train data
project_grade_category_one_hot = vectorizer_grade_category.transform(tr_X['project_grade_category'].values)
print("\nShape of project_grade_category matrix after one hot encoding ", project_grade_category_one_hot.shape)
### 5) teacher_prefix
print('==================================================================\n')
# One hot encoding for teacher_prefix
# CountVectorizer whose vocabulary is the set of unique teacher_prefix values; binary=True gives a boolean BoW
# Some rows had NaN here; those rows were dropped above (alternatively, NaN could be filled with the string 'None')
# tr_X['teacher_prefix'] = tr_X['teacher_prefix'].fillna('None')
vectorizer_teacher_prefix = CountVectorizer(vocabulary=tr_X['teacher_prefix'].unique(), lowercase=False, binary=True)
vectorizer_teacher_prefix.fit(tr_X['teacher_prefix'].values)
print('List of features in teacher_prefix', vectorizer_teacher_prefix.get_feature_names())
# Transform train data
teacher_prefix_one_hot = vectorizer_teacher_prefix.transform(tr_X['teacher_prefix'].values)
print("\nShape of teacher_prefix matrix after one hot encoding ", teacher_prefix_one_hot.shape)
# Transform categorical for test data
ts_school_state = vectorizer_school_state.transform(ts_X['school_state'].values)
ts_project_subject_category = vectorizer_categories.transform(ts_X['clean_categories'].values)
ts_project_subject_subcategory = vectorizer_subcategories.transform(ts_X['clean_subcategories'].values)
ts_project_grade_category = vectorizer_grade_category.transform(ts_X['project_grade_category'].values)
ts_teacher_prefix = vectorizer_teacher_prefix.transform(ts_X['teacher_prefix'].values)
print('--------------Test data--------------')
print('Shape of school_state:', ts_school_state.shape)
print('Shape of project_subject_categories:', ts_project_subject_category.shape)
print('Shape of project_subject_subcategories:', ts_project_subject_subcategory.shape)
print('Shape of project_grade_category:', ts_project_grade_category.shape)
print('Shape of teacher_prefix:', ts_teacher_prefix.shape)
# store all feature names, used later to print feature importances
list_features = ['quantity', 'price', 'teacher_number_of_previously_posted_projects']
list_features.extend(vectorizer_school_state.get_feature_names())
list_features.extend(vectorizer_categories.get_feature_names())
list_features.extend(vectorizer_subcategories.get_feature_names())
list_features.extend(vectorizer_grade_category.get_feature_names())
list_features.extend(vectorizer_teacher_prefix.get_feature_names())
len(list_features)
# please write all the code with proper documentation, and proper titles for each subsection
# go through documentations and blogs before you start coding
# first figure out what to do, and then think about how to do it.
# reading and understanding error messages will be very helpful in debugging your code
# make sure you featurize train and test data separately
# when you plot any graph make sure you use
# a. Title, that describes your plot, this will be very helpful to the reader
# b. Legends if needed
# c. X-axis label
# d. Y-axis label
We have already preprocessed both essay and project_title in the text-processing sections (1.3 and 1.4) above.
Apply Decision Tree on the different featurizations mentioned in the instructions.
For every model you work on, make sure you do step 2 and step 3 of the instructions.
# please write all the code with proper documentation, and proper titles for each subsection
# go through documentations and blogs before you start coding
# first figure out what to do, and then think about how to do it.
# reading and understanding error messages will be very helpful in debugging your code
# when you plot any graph make sure you use
# a. Title, that describes your plot, this will be very helpful to the reader
# b. Legends if needed
# c. X-axis label
# d. Y-axis label
### BoW in Essay and Title on Train
# We consider only words that appear in at least 10 documents (rows or projects), capped at max_features=5000
vectorizer_bow = CountVectorizer(min_df=10, max_features=5000)
tr_essay = vectorizer_bow.fit_transform(tr_X['clean_essay'].values)
print("Shape of essay matrix after BoW encoding on train", tr_essay.shape)
# Similarly vectorize the title
vectorizer_bowt = CountVectorizer(min_df=10, max_features=5000)
tr_title = vectorizer_bowt.fit_transform(tr_X['clean_project_title'].values)
print("Shape of title matrix after BoW encoding ", tr_title.shape)
### BoW in Essay and Title on Test
print('===========================================================\n')
ts_essay = vectorizer_bow.transform(ts_X['clean_essay'].values)
print("Shape of essay matrix after BoW encoding on test", ts_essay.shape)
ts_title = vectorizer_bowt.transform(ts_X['clean_project_title'].values)
print("Shape of title matrix after BoW encoding on test", ts_title.shape)
print('Shape of essay matrix in train data', tr_essay.shape)
print('Shape of title matrix in train data', tr_title.shape)
print('=======================================================\n')
print('Shape of essay matrix in test data', ts_essay.shape)
print('Shape of title matrix in test data', ts_title.shape)
list_features.extend(vectorizer_bow.get_feature_names())
list_features.extend(vectorizer_bowt.get_feature_names())
len(list_features)
### TFIDF in Essay and Title on Train
# We consider only words that appear in at least 10 documents (rows or projects), capped at max_features=5000
vectorizer_bow = TfidfVectorizer(min_df=10, max_features=5000)
tr_essay = vectorizer_bow.fit_transform(tr_X['clean_essay'].values)
print("Shape of essay matrix after TFIDF encoding on train", tr_essay.shape)
# Similarly vectorize the title
vectorizer_bowt = TfidfVectorizer(min_df=10, max_features=5000)
tr_title = vectorizer_bowt.fit_transform(tr_X['clean_project_title'].values)
print("Shape of title matrix after TFIDF encoding ", tr_title.shape)
### TFIDF in Essay and Title on Test
print('===========================================================\n')
ts_essay = vectorizer_bow.transform(ts_X['clean_essay'].values)
print("Shape of essay matrix after TFIDF encoding on test", ts_essay.shape)
ts_title = vectorizer_bowt.transform(ts_X['clean_project_title'].values)
print("Shape of title matrix after TFIDF encoding on test", ts_title.shape)
print('Shape of essay matrix in train data', tr_essay.shape)
print('Shape of title matrix in train data', tr_title.shape)
print('=======================================================\n')
print('Shape of essay matrix in test data', ts_essay.shape)
print('Shape of title matrix in test data', ts_title.shape)
list_features.extend(vectorizer_bow.get_feature_names())
list_features.extend(vectorizer_bowt.get_feature_names())
len(list_features)
# storing variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
# make sure you have the glove_vectors file
with open('glove_vectors', 'rb') as f:
    model = pickle.load(f)
    glove_words = set(model.keys())
# average Word2Vec
# compute average word2vec for each essay and title in train
avg_w2v_essay = []  # the avg-w2v for each essay is stored in this list
for sentence in tqdm(tr_X['clean_essay'].values):  # for each essay
    vector = np.zeros(300)  # glove vectors are of length 300
    cnt_words = 0  # number of words with a valid vector in the essay
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_essay.append(vector)
avg_w2v_title = []  # the avg-w2v for each title is stored in this list
for sentence in tqdm(tr_X['clean_project_title'].values):  # for each title
    vector = np.zeros(300)
    cnt_words = 0
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_title.append(vector)
tr_essay = np.array(avg_w2v_essay)
tr_title = np.array(avg_w2v_title)
print('======================== Train Essay ===========================')
print(len(avg_w2v_essay))
print(len(avg_w2v_essay[0]))
print('======================= Train Title ===============================')
print(len(avg_w2v_title))
print(len(avg_w2v_title[0]))
# print(avg_w2v_essay[0])
# average Word2Vec
# compute average word2vec for each essay and title in test
avg_w2v_essay = []  # the avg-w2v for each essay is stored in this list
for sentence in tqdm(ts_X['clean_essay'].values):  # for each essay
    vector = np.zeros(300)  # glove vectors are of length 300
    cnt_words = 0  # number of words with a valid vector in the essay
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_essay.append(vector)
avg_w2v_title = []  # the avg-w2v for each title is stored in this list
for sentence in tqdm(ts_X['clean_project_title'].values):  # for each title
    vector = np.zeros(300)
    cnt_words = 0
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_title.append(vector)
ts_essay = np.array(avg_w2v_essay)
ts_title = np.array(avg_w2v_title)
print('======================== Test Essay ===========================')
print(len(avg_w2v_essay))
print(len(avg_w2v_essay[0]))
print('======================= Test Title ===============================')
print(len(avg_w2v_title))
print(len(avg_w2v_title[0]))
# print(avg_w2v_essay[0])
# TFIDF weighted w2v on essay in train
tfidf_model = TfidfVectorizer()
tfidf_model.fit(tr_X['clean_essay'].values)
# we are building a dictionary with word as the key and the idf as the value
dictionary = dict(zip(tfidf_model.get_feature_names(), list(tfidf_model.idf_)))
tfidf_words = set(tfidf_model.get_feature_names())
# compute tfidf-weighted word2vec for each essay
tfidf_w2v_essay = []  # the tfidf-w2v for each essay is stored in this list
for sentence in tqdm(tr_X['clean_essay'].values):  # for each essay
    vector = np.zeros(300)  # glove vectors are of length 300
    tf_idf_weight = 0  # sum of tf-idf weights of words with a valid vector
    for word in sentence.split():
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]  # getting the vector for each word
            # idf (dictionary[word]) times tf; str.count matches substrings, so tf is approximate
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split()))
            vector += (vec * tf_idf)  # accumulating the tfidf-weighted w2v
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_essay.append(vector)
tr_essay = np.array(tfidf_w2v_essay)
# compute tfidf-weighted word2vec for each essay in test (using the idf dictionary fitted on train)
tfidf_w2v_essay = []
for sentence in tqdm(ts_X['clean_essay'].values):
    vector = np.zeros(300)  # glove vectors are of length 300
    tf_idf_weight = 0  # sum of tf-idf weights of words with a valid vector
    for word in sentence.split():
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split()))
            vector += (vec * tf_idf)
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_essay.append(vector)
ts_essay = np.array(tfidf_w2v_essay)
# TFIDF weighted w2v on title in train
tfidf_model = TfidfVectorizer()
tfidf_model.fit(tr_X['clean_project_title'].values)
# we are building a dictionary with word as the key and the idf as the value
dictionary = dict(zip(tfidf_model.get_feature_names(), list(tfidf_model.idf_)))
tfidf_words = set(tfidf_model.get_feature_names())
tfidf_w2v_title = []  # the tfidf-w2v for each title is stored in this list
for sentence in tqdm(tr_X['clean_project_title'].values):
    vector = np.zeros(300)  # glove vectors are of length 300
    tf_idf_weight = 0
    for word in sentence.split():
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split()))
            vector += (vec * tf_idf)
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_title.append(vector)
tr_title = np.array(tfidf_w2v_title)
# compute tfidf-weighted word2vec for each title in test
tfidf_w2v_title = []
for sentence in tqdm(ts_X['clean_project_title'].values):
    vector = np.zeros(300)  # glove vectors are of length 300
    tf_idf_weight = 0
    for word in sentence.split():
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split()))
            vector += (vec * tf_idf)
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_title.append(vector)
ts_title = np.array(tfidf_w2v_title)
print('Train essay and title shape:',tr_essay.shape,tr_title.shape)
print('Test essay and title shape:',ts_essay.shape,ts_title.shape)
quantity_normalized.T.shape, price_normalized.T.shape, teacher_number_of_previously_posted_projects_normalized.T.shape
school_state_one_hot.shape, categories_one_hot.shape, subcategories_one_hot.shape, project_grade_category_one_hot.shape, \
teacher_prefix_one_hot.shape
tr_essay.shape, tr_title.shape
# for train data
from scipy.sparse import hstack
tr_X = hstack((quantity_normalized.T, price_normalized.T, teacher_number_of_previously_posted_projects_normalized.T, \
school_state_one_hot, categories_one_hot, subcategories_one_hot, project_grade_category_one_hot, \
teacher_prefix_one_hot, tr_essay, tr_title))
tr_X.shape
tr_X = tr_X.toarray()
tr_X
tr_X.shape, tr_y.shape
# for test data
ts_text = ts_X  # keep the raw test dataframe; used later (ts_text.iloc[i]) to inspect false positives
ts_X = hstack((ts_quantity.T, ts_price.T, ts_teacher_number_of_previously_posted_projects.T, ts_school_state, \
ts_project_subject_category, ts_project_subject_subcategory,ts_project_grade_category, \
ts_teacher_prefix, ts_essay, ts_title))
ts_X.shape
ts_X = ts_X.toarray()
ts_X.shape, ts_y.shape
# check whether data still contain NaN or infinity or not
np.any(np.isnan(tr_X)), np.any(np.isnan(ts_X))
np.all(np.isfinite(tr_X)), np.all(np.isfinite(ts_X))
len(list_features)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
# we write our own predict function with a chosen threshold;
# we pick the threshold that maximizes tpr*(1-fpr)
def find_best_threshold(threshold, fpr, tpr):
    t = threshold[np.argmax(tpr*(1-fpr))]
    # (tpr*(1-fpr)) is maximal when fpr is very low and tpr is very high
    print("the maximum value of tpr*(1-fpr)", max(tpr*(1-fpr)), "for threshold", np.round(t,3))
    return t

def predict_with_best_t(proba, threshold):
    predictions = []
    for i in proba:
        if i >= threshold:
            predictions.append(1)
        else:
            predictions.append(0)
    return predictions
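A short usage note for the two helpers above (the ROC arrays referenced here are computed further below with roc_curve):
# usage sketch: pick the threshold on the train ROC curve, then binarize test probabilities with it
# best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)
# ts_predict = predict_with_best_t(y_test_pred, best_t)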
def plot_cm(feature_names, tr_thresholds, train_fpr, train_tpr, y_train, y_train_pred, y_test, y_test_pred):
    """
    Parameters:
    feature_names - (string) featurization name used in the plot titles
    tr_thresholds - thresholds from the train ROC curve
    train_fpr - FPR for train data
    train_tpr - TPR for train data
    y_train, y_train_pred - train labels and predicted probabilities
    y_test, y_test_pred - test labels and predicted probabilities
    Returns:
    Plots the confusion matrix for train and test data
    """
    best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)
    print("Train confusion matrix")
    cm = metrics.confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
    plt.figure(figsize=(10,7))
    sns.heatmap(cm, annot=True, fmt="d")
    plt.xlabel('Predicted Class')
    plt.ylabel('True Class')
    plt.title('Confusion matrix for Train Data when DecisionTree with {0} features'.format(feature_names))
    print("Test confusion matrix")
    cm = metrics.confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
    plt.figure(figsize=(10,7))
    sns.heatmap(cm, annot=True, fmt="d")
    plt.xlabel('Predicted Class')
    plt.ylabel('True Class')
    plt.title('Confusion matrix for Test Data when DecisionTree with {0} features'.format(feature_names))
clf = DecisionTreeClassifier(class_weight='balanced', random_state=1)
parameters = {'max_depth':[1,5,10,50,100,500,1000], \
'min_samples_split':[5,10,100,500]}
# Please write all the code with proper documentation
clf = GridSearchCV(clf, parameters, cv=3, scoring='roc_auc', verbose=3)
clf.fit(tr_X, tr_y)
best_d = clf.best_params_['max_depth']
best_split = clf.best_params_['min_samples_split']
best_d, best_split
# Taking a shallow max_depth (3) to visualize the tree, as per the task
lr = DecisionTreeClassifier(max_depth=3, class_weight='balanced', min_samples_split=500)
lr.fit(tr_X,tr_y)
from sklearn.externals.six import StringIO
dot_data = StringIO()
tree.export_graphviz(lr, out_file=dot_data, \
filled=True, rounded=True, special_characters=True, \
feature_names=list_features, class_names=['not approved','approved'])
import pydotplus
from IPython.display import Image
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
# Plot seaborn heatmap for gridsearchcv: https://www.kaggle.com/arindambanerjee/grid-search-simplified
max_depth_list = list(clf.cv_results_['param_max_depth'].data)
samplesplit_list = list(clf.cv_results_['param_min_samples_split'].data)
plt.figure(1, figsize=(10,10))
plt.subplot(211)
data = pd.DataFrame(data={'Max Depth':max_depth_list, \
'Min Sample Split':samplesplit_list , \
'AUC':clf.cv_results_['mean_train_score']})
data = data.pivot(index='Max Depth', columns='Min Sample Split', values='AUC')
sns.heatmap(data, annot=True).set_title('AUC for Training data')
plt.subplot(212)
data = pd.DataFrame(data={'Max Depth':max_depth_list, \
'Min Sample Split':samplesplit_list , \
'AUC':clf.cv_results_['mean_test_score']})
data = data.pivot(index='Max Depth', columns='Min Sample Split', values='AUC')
sns.heatmap(data, annot=True).set_title('AUC for CV data')
plt.show()
lr = DecisionTreeClassifier(max_depth=best_d, class_weight='balanced', min_samples_split=best_split, random_state=1)
lr.fit(tr_X, tr_y)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
y_train_pred = lr.predict_proba(tr_X)[:,1]
y_test_pred = lr.predict_proba(ts_X)[:,1]
train_fpr, train_tpr, tr_thresholds = roc_curve(tr_y, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(ts_y, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.title("ROC on BoW")
plt.grid()
plt.show()
plot_cm('BoW', tr_thresholds, train_fpr, train_tpr, tr_y, y_train_pred, ts_y, y_test_pred)
# Predict test data with the best threshold value
ts_predict = predict_with_best_t(lr.predict_proba(ts_X)[:,1], 0.475)
false_datapoints = []
# Iterate over all data points in the test data
for i in range(ts_X.shape[0]):
    # Select data points where the true label is 0 and the prediction is 1 (false positives)
    if (ts_y[i] == 0) and (ts_predict[i] == 1):
        false_datapoints.append(ts_text.iloc[i].values)
false_datapoints = pd.DataFrame(data=false_datapoints, columns=ts_text.columns)
false_datapoints.shape
# https://www.geeksforgeeks.org/generating-word-cloud-python/
# WordCloud on Essay
from wordcloud import WordCloud
comment_words = ' '
for val in false_datapoints['clean_essay']:
    # typecast each val to string
    val = str(val)
    # split the value into tokens
    tokens = val.split()
    # convert each token to lowercase
    for i in range(len(tokens)):
        tokens[i] = tokens[i].lower()
    for words in tokens:
        comment_words = comment_words + words + ' '
wordcloud = WordCloud(width=800, height=800,
                      background_color='white',
                      stopwords=stopwords,
                      min_font_size=10).generate(comment_words)
# plot the WordCloud image
plt.figure(figsize=(8, 8), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad=0)
plt.show()
# BoxPlot on test price
plt.boxplot(false_datapoints['price'])
plt.title('BoxPlot on Price in False Positive Dataset')
plt.show()
# PDF on teacher_number_of_previously_posted_projects
sns.distplot(false_datapoints['teacher_number_of_previously_posted_projects'], hist=True, kde=True)
# Please write all the code with proper documentation
# re-instantiate the grid search on a fresh estimator (clf from the previous cell is already a fitted GridSearchCV)
clf = GridSearchCV(DecisionTreeClassifier(class_weight='balanced', random_state=1), parameters, cv=3, scoring='roc_auc', verbose=3)
clf.fit(tr_X, tr_y)
best_d = clf.best_params_['max_depth']
best_split = clf.best_params_['min_samples_split']
best_d, best_split
# Taking a shallow max_depth (3) to visualize the tree, as per the task
lr = DecisionTreeClassifier(max_depth=3, class_weight='balanced', min_samples_split=best_split)
lr.fit(tr_X, tr_y)
from sklearn.externals.six import StringIO
dot_data = StringIO()
tree.export_graphviz(lr, out_file=dot_data, \
filled=True, rounded=True, special_characters=True, \
feature_names=list_features, class_names=['not approved','approved'])
import pydotplus
from IPython.display import Image
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
# Plot seaborn heatmap for gridsearchcv: https://www.kaggle.com/arindambanerjee/grid-search-simplified
max_depth_list = list(clf.cv_results_['param_max_depth'].data)
samplesplit_list = list(clf.cv_results_['param_min_samples_split'].data)
plt.figure(1, figsize=(10,10))
plt.subplot(211)
data = pd.DataFrame(data={'Max Depth':max_depth_list, \
'Min Sample Split':samplesplit_list , \
'AUC':clf.cv_results_['mean_train_score']})
data = data.pivot(index='Max Depth', columns='Min Sample Split', values='AUC')
sns.heatmap(data, annot=True).set_title('AUC for Training data')
plt.subplot(212)
data = pd.DataFrame(data={'Max Depth':max_depth_list, \
'Min Sample Split':samplesplit_list , \
'AUC':clf.cv_results_['mean_test_score']})
data = data.pivot(index='Max Depth', columns='Min Sample Split', values='AUC')
sns.heatmap(data, annot=True).set_title('AUC for CV data')
plt.show()
lr = DecisionTreeClassifier(max_depth=best_d, class_weight='balanced', min_samples_split=best_split, random_state=1)
lr.fit(tr_X, tr_y)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
y_train_pred = lr.predict_proba(tr_X)[:,1]
y_test_pred = lr.predict_proba(ts_X)[:,1]
train_fpr, train_tpr, tr_thresholds = roc_curve(tr_y, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(ts_y, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.title("ROC on TFIDF")
plt.grid()
plt.show()
plot_cm('TFIDF', tr_thresholds, train_fpr, train_tpr, tr_y, y_train_pred, ts_y, y_test_pred)
# Predict test data with the best threshold value
ts_predict = predict_with_best_t(lr.predict_proba(ts_X)[:,1], 0.469)
false_datapoints = []
# Iterate over all data points in the test data
for i in range(ts_X.shape[0]):
    # Select data points where the true label is 0 and the prediction is 1 (false positives)
    if (ts_y[i] == 0) and (ts_predict[i] == 1):
        false_datapoints.append(ts_text.iloc[i].values)
false_datapoints = pd.DataFrame(data=false_datapoints, columns=ts_text.columns)
false_datapoints.shape
# https://www.geeksforgeeks.org/generating-word-cloud-python/
# WordCloud on Essay
from wordcloud import WordCloud
comment_words = ' '
for val in false_datapoints['clean_essay']:
    # typecast each val to string
    val = str(val)
    # split the value into tokens
    tokens = val.split()
    # convert each token to lowercase
    for i in range(len(tokens)):
        tokens[i] = tokens[i].lower()
    for words in tokens:
        comment_words = comment_words + words + ' '
wordcloud = WordCloud(width=800, height=800,
                      background_color='white',
                      stopwords=stopwords,
                      min_font_size=10).generate(comment_words)
# plot the WordCloud image
plt.figure(figsize=(8, 8), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad=0)
plt.show()
# Box plot of price for the false positives in the test set
plt.boxplot(false_datapoints['price'])
plt.title('BoxPlot on Price in False Positive Dataset')
plt.show()
# PDF on teacher_number_of_previously_posted_projects
sns.distplot(false_datapoints['teacher_number_of_previously_posted_projects'], hist=True, kde=True)
# Re-create the base estimator so we don't wrap the previously fitted GridSearchCV
clf = DecisionTreeClassifier(class_weight='balanced', random_state=1)
clf = GridSearchCV(clf, parameters, cv=3, scoring='roc_auc', verbose=3)
clf.fit(tr_X, tr_y)
best_d = clf.best_params_['max_depth']
best_split = clf.best_params_['min_samples_split']
best_d, best_split
# Plot seaborn heatmap for gridsearchcv: https://www.kaggle.com/arindambanerjee/grid-search-simplified
max_depth_list = list(clf.cv_results_['param_max_depth'].data)
samplesplit_list = list(clf.cv_results_['param_min_samples_split'].data)
plt.figure(1, figsize=(10,10))
plt.subplot(211)
data = pd.DataFrame(data={'Max Depth':max_depth_list, \
'Min Sample Split':samplesplit_list , \
'AUC':clf.cv_results_['mean_train_score']})
data = data.pivot(index='Max Depth', columns='Min Sample Split', values='AUC')
sns.heatmap(data, annot=True).set_title('AUC for Training data')
plt.subplot(212)
data = pd.DataFrame(data={'Max Depth':max_depth_list, \
'Min Sample Split':samplesplit_list , \
'AUC':clf.cv_results_['mean_test_score']})
data = data.pivot(index='Max Depth', columns='Min Sample Split', values='AUC')
sns.heatmap(data, annot=True).set_title('AUC for CV data')
plt.show()
lr = DecisionTreeClassifier(max_depth=best_d, class_weight='balanced', min_samples_split=best_split, random_state=1)
lr.fit(tr_X, tr_y)
# For roc_auc_score(y_true, y_score), the 2nd parameter should be probability
# estimates of the positive class, not the predicted labels
y_train_pred = lr.predict_proba(tr_X)[:,1]
y_test_pred = lr.predict_proba(ts_X)[:,1]
train_fpr, train_tpr, tr_thresholds = roc_curve(tr_y, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(ts_y, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.title("ROC on AVGW2V")
plt.grid()
plt.show()
plot_cm('AVGW2V', tr_thresholds, train_fpr, train_tpr, tr_y, y_train_pred, ts_y, y_test_pred)
# Predict on the test data with the best threshold value
ts_predict = predict_with_best_t(y_test_pred, 0.482)
false_datapoints = []
# Iterate over all data points in the test set
for i in range(ts_X.shape[0]):
    # Keep the false positives: true label 0 but predicted label 1
    if (ts_y[i] == 0) and (ts_predict[i] == 1):
        # Collect the corresponding raw text row for analysis
        false_datapoints.append(ts_text.iloc[i].values)
false_datapoints = pd.DataFrame(data=false_datapoints, columns=ts_text.columns)
false_datapoints.shape
# https://www.geeksforgeeks.org/generating-word-cloud-python/
# WordCloud on Essay
from wordcloud import WordCloud
comment_words = ' '
for val in false_datapoints['clean_essay']:
    # typecast each value to string
    val = str(val)
    # split the essay into tokens
    tokens = val.split()
    # convert each token to lowercase
    for i in range(len(tokens)):
        tokens[i] = tokens[i].lower()
    # append the tokens to one long string for the word cloud
    for words in tokens:
        comment_words = comment_words + words + ' '
wordcloud = WordCloud(width = 800, height = 800,
background_color ='white',
stopwords = stopwords,
min_font_size = 10).generate(comment_words)
# plot the WordCloud image
plt.figure(figsize = (8, 8), facecolor = None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad = 0)
plt.show()
# Box plot of price for the false positives in the test set
plt.boxplot(false_datapoints['price'])
plt.title('BoxPlot on Price in False Positive Dataset')
plt.show()
# PDF on teacher_number_of_previously_posted_projects
sns.distplot(false_datapoints['teacher_number_of_previously_posted_projects'], hist=True, kde=True)
# Re-create the base estimator so we don't wrap the previously fitted GridSearchCV
clf = DecisionTreeClassifier(class_weight='balanced', random_state=1)
clf = GridSearchCV(clf, parameters, cv=3, scoring='roc_auc', verbose=3)
clf.fit(tr_X, tr_y)
best_d = clf.best_params_['max_depth']
best_split = clf.best_params_['min_samples_split']
best_d, best_split
# Plot seaborn heatmap for gridsearchcv: https://www.kaggle.com/arindambanerjee/grid-search-simplified
max_depth_list = list(clf.cv_results_['param_max_depth'].data)
samplesplit_list = list(clf.cv_results_['param_min_samples_split'].data)
plt.figure(1, figsize=(10,10))
plt.subplot(211)
data = pd.DataFrame(data={'Max Depth':max_depth_list, \
'Min Sample Split':samplesplit_list , \
'AUC':clf.cv_results_['mean_train_score']})
data = data.pivot(index='Max Depth', columns='Min Sample Split', values='AUC')
sns.heatmap(data, annot=True).set_title('AUC for Training data')
plt.subplot(212)
data = pd.DataFrame(data={'Max Depth':max_depth_list, \
'Min Sample Split':samplesplit_list , \
'AUC':clf.cv_results_['mean_test_score']})
data = data.pivot(index='Max Depth', columns='Min Sample Split', values='AUC')
sns.heatmap(data, annot=True).set_title('AUC for CV data')
plt.show()
lr = DecisionTreeClassifier(max_depth=best_d, class_weight='balanced', min_samples_split=best_split, random_state=1)
lr.fit(tr_X, tr_y)
# For roc_auc_score(y_true, y_score), the 2nd parameter should be probability
# estimates of the positive class, not the predicted labels
y_train_pred = lr.predict_proba(tr_X)[:,1]
y_test_pred = lr.predict_proba(ts_X)[:,1]
train_fpr, train_tpr, tr_thresholds = roc_curve(tr_y, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(ts_y, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.title("ROC on TFIDFW2V")
plt.grid()
plt.show()
plot_cm('TFIDFW2V', tr_thresholds, train_fpr, train_tpr, tr_y, y_train_pred, ts_y, y_test_pred)
# Predict on the test data with the best threshold value
ts_predict = predict_with_best_t(y_test_pred, 0.5)
false_datapoints = []
# Iterate over all data points in the test set
for i in range(ts_X.shape[0]):
    # Keep the false positives: true label 0 but predicted label 1
    if (ts_y[i] == 0) and (ts_predict[i] == 1):
        # Collect the corresponding raw text row for analysis
        false_datapoints.append(ts_text.iloc[i].values)
false_datapoints = pd.DataFrame(data=false_datapoints, columns=ts_text.columns)
false_datapoints.shape
# https://www.geeksforgeeks.org/generating-word-cloud-python/
# WordCloud on Essay
from wordcloud import WordCloud
comment_words = ' '
for val in false_datapoints['clean_essay']:
    # typecast each value to string
    val = str(val)
    # split the essay into tokens
    tokens = val.split()
    # convert each token to lowercase
    for i in range(len(tokens)):
        tokens[i] = tokens[i].lower()
    # append the tokens to one long string for the word cloud
    for words in tokens:
        comment_words = comment_words + words + ' '
wordcloud = WordCloud(width = 800, height = 800,
background_color ='white',
stopwords = stopwords,
min_font_size = 10).generate(comment_words)
# plot the WordCloud image
plt.figure(figsize = (8, 8), facecolor = None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad = 0)
plt.show()
# Box plot of price for the false positives in the test set
plt.boxplot(false_datapoints['price'])
plt.title('BoxPlot on Price in False Positive Dataset')
plt.show()
# PDF on teacher_number_of_previously_posted_projects
sns.distplot(false_datapoints['teacher_number_of_previously_posted_projects'], hist=True, kde=True)
# Before proceeding, re-run the [SET 2] cells above on the TFIDF features so that lr holds the fitted TFIDF decision tree
# Indices of the top-5k features ranked by decision-tree importance
feature_imp5k = np.flip(np.argsort(lr.feature_importances_))[:5000]
# Shape of the index array
feature_imp5k.shape
# Column indices of the selected features
feature_imp5k
tr_5k = tr_X[:,feature_imp5k]
ts_5k = ts_X[:,feature_imp5k]
tr_5k.shape, ts_5k.shape
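As a quick sanity check, the selected column indices can be mapped back to readable feature names, assuming list_features (used for the tree plot above) is ordered the same way as the columns of tr_X:
# Assumes list_features is aligned with the columns of tr_X
top_feature_names = np.array(list_features)[feature_imp5k]
top_feature_names[:10]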
# Train a Logistic Regression model on the top-5k features
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(class_weight='balanced', random_state=1)
parameters = {'C':[10**-4,10**-3,10**-2,10**-1,1,10,100,10**3,10**4],
'penalty':['l1','l2']}
clf = GridSearchCV(clf, parameters, cv=3, scoring='roc_auc', verbose=3)
clf.fit(tr_5k, tr_y)
best_C = clf.best_params_['C']
best_penalty = clf.best_params_['penalty']
best_C, best_penalty
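A version caveat for reproducing this search: from scikit-learn 0.22 the default LogisticRegression solver is lbfgs, which does not support penalty='l1', so on newer versions the base estimator should pin the solver:
# Needed on scikit-learn >= 0.22: liblinear supports both 'l1' and 'l2' penalties
clf = LogisticRegression(class_weight='balanced', solver='liblinear', random_state=1)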
# Plot seaborn heatmap for gridsearchcv: https://www.kaggle.com/arindambanerjee/grid-search-simplified
C_list = list(clf.cv_results_['param_C'].data)
penalty_list = list(clf.cv_results_['param_penalty'].data)
plt.figure(1, figsize=(10,10))
plt.subplot(211)
data = pd.DataFrame(data={'C':C_list, \
'Penalty':penalty_list , \
'AUC':clf.cv_results_['mean_train_score']})
data = data.pivot(index='C', columns='Penalty', values='AUC')
sns.heatmap(data, annot=True).set_title('AUC for Training data')
plt.subplot(212)
data = pd.DataFrame(data={'C':C_list, \
'Penalty':penalty_list , \
'AUC':clf.cv_results_['mean_test_score']})
data = data.pivot(index='C', columns='Penalty', values='AUC')
sns.heatmap(data, annot=True).set_title('AUC for CV data')
plt.show()
lr = LogisticRegression(C=best_C, class_weight='balanced', penalty=best_penalty, random_state=1)
lr.fit(tr_5k, tr_y)
# For roc_auc_score(y_true, y_score), the 2nd parameter should be probability
# estimates of the positive class, not the predicted labels
y_train_pred = lr.predict_proba(tr_5k)[:,1]
y_test_pred = lr.predict_proba(ts_5k)[:,1]
train_fpr, train_tpr, tr_thresholds = roc_curve(tr_y, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(ts_y, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.title("ROC For 5k feature impotance with LogisticRegr Model")
plt.grid()
plt.show()
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)
print("Train confusion matrix")
cm = metrics.confusion_matrix(tr_y, predict_with_best_t(y_train_pred, best_t))
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot=True, fmt="d")
plt.xlabel('Predicted Class')
plt.ylabel('True Class')
plt.title('Confusion matrix for Train Data when LogisticRegr with 5K Feature Imp features')
print("Test confusion matrix")
# Apply the same best threshold found on the train ROC
cm = metrics.confusion_matrix(ts_y, predict_with_best_t(y_test_pred, best_t))
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot=True, fmt="d")
plt.xlabel('Predicted Class')
plt.ylabel('True Class')
plt.title('Confusion matrix for Test Data when LogisticRegr with 5K Feature Imp features')
# Predict on the test data with the best threshold value
ts_predict = predict_with_best_t(y_test_pred, best_t)
false_datapoints = []
# Iterate over all data points in the test set
for i in range(ts_5k.shape[0]):
    # Keep the false positives: true label 0 but predicted label 1
    if (ts_y[i] == 0) and (ts_predict[i] == 1):
        # Collect the corresponding raw text row for analysis
        false_datapoints.append(ts_text.iloc[i].values)
false_datapoints = pd.DataFrame(data=false_datapoints, columns=ts_text.columns)
false_datapoints.shape
# https://www.geeksforgeeks.org/generating-word-cloud-python/
# WordCloud on Essay
from wordcloud import WordCloud
comment_words = ' '
for val in false_datapoints['clean_essay']:
    # typecast each value to string
    val = str(val)
    # split the essay into tokens
    tokens = val.split()
    # convert each token to lowercase
    for i in range(len(tokens)):
        tokens[i] = tokens[i].lower()
    # append the tokens to one long string for the word cloud
    for words in tokens:
        comment_words = comment_words + words + ' '
wordcloud = WordCloud(width = 800, height = 800,
background_color ='white',
stopwords = stopwords,
min_font_size = 10).generate(comment_words)
# plot the WordCloud image
plt.figure(figsize = (8, 8), facecolor = None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad = 0)
plt.show()
# Box plot of price for the false positives in the test set
plt.boxplot(false_datapoints['price'])
plt.title('BoxPlot on Price in False Positive Dataset')
plt.show()
# PDF on teacher_number_of_previously_posted_projects
sns.distplot(false_datapoints['teacher_number_of_previously_posted_projects'], hist=True, kde=True)
# Compare all the models using the PrettyTable library
# http://zetcode.com/python/prettytable/
from prettytable import PrettyTable
x = PrettyTable()
x.field_names = ["Features", "Model", "max_depth", "min_samples_split", "Maximum AUC score"]
x.add_row(["BoW", "DT", 10, 500, 0.66651])
x.add_row(["TFIDF", "DT", 10, 500, 0.6676])
x.add_row(["AVG W2V", "DT", 5, 500, 0.64207])
x.add_row(["TFIDF W2V", "DT", 5, 500, 0.63712])
print(x)
y = PrettyTable()
y.field_names = ["Features", "Model", "C", "penalty", "Maximum AUC score"]
y.add_row(["5k Feature Importance from DT", "LogisticRegr", 1, "l1", 0.69757])
print(y)
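The AUC figures in these tables were presumably copied from each grid search's printed output; the same number can be pulled programmatically from a fitted GridSearchCV, which avoids transcription slips:
# Best cross-validated AUC of the most recent grid search (here, the 5k-feature model)
print(clf.best_score_)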